In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this interleaved two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks and present detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening the sequence objects and also allows us to operate on significantly larger sequences than existing methods.
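To make the contrast between the two input representations concrete, here is a minimal sketch of the key-conditioned (TVM) view versus the flattened baseline view of a structured object sequence. The event schema and helper names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical example: a sequence of structured objects, each a set of
# key-value pairs. Schema and helper names are illustrative only.

def to_key_conditioned_sequences(objects, keys):
    """TVM view: for each key, the sequence of its values over time."""
    return {k: [obj.get(k) for obj in objects] for k in keys}

def to_flattened_sequence(objects, keys):
    """Baseline 'flattened' view: all key-value pairs serialized in order."""
    tokens = []
    for obj in objects:
        for k in keys:
            if k in obj:
                tokens.extend([k, str(obj[k])])
    return tokens

events = [
    {"action": "click", "page": "home"},
    {"action": "search", "page": "results"},
    {"action": "click", "page": "item"},
]
keys = ["action", "page"]

tvm = to_key_conditioned_sequences(events, keys)   # one sequence per key
flat = to_flattened_sequence(events, keys)          # one long token stream
```

The flattened view grows with the number of keys times the sequence length, which is why it hits encoder length limits quickly; each key-conditioned sequence grows only with the number of objects, consistent with the claim that TVM-KA can operate on significantly longer sequences.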
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Recent methods for knowledge-grounded dialog generate responses by incorporating information from an external textual document. These methods do not require the exact document to be known during training and rely on the use of a retrieval system to fetch relevant documents from a large index. The documents used to generate the responses are modeled as latent variables whose prior probabilities need to be estimated. Models such as RAG marginalize the document probabilities over the documents retrieved from the index to define the log-likelihood loss function that is optimized end-to-end. In this paper, we develop a variational approach to the above technique wherein we instead maximize the Evidence Lower Bound (ELBO). Using a collection of three publicly available open-conversation datasets, we demonstrate how the posterior distribution, which has information from the ground-truth response, allows for a better approximation of the objective function during training. To overcome the challenges associated with sampling over a large knowledge collection, we develop an efficient approach to approximate the ELBO. To the best of our knowledge, we are the first to apply variational training to open-scale unsupervised knowledge-grounded dialog systems.
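For reference, the evidence lower bound maximized in this kind of approach takes the standard form; writing $x$ for the dialog context, $y$ for the response, and $d$ for the latent document (notation assumed here, not taken from the paper), marginalization over retrieved documents is bounded by:

```latex
\log p(y \mid x)
  = \log \sum_{d} p(d \mid x)\, p(y \mid x, d)
  \;\ge\; \mathbb{E}_{q(d \mid x, y)}\bigl[\log p(y \mid x, d)\bigr]
  - \mathrm{KL}\bigl(q(d \mid x, y) \,\|\, p(d \mid x)\bigr)
  = \mathrm{ELBO}
```

The posterior $q(d \mid x, y)$ conditions on the ground-truth response $y$, which is how information from the response enters training; the expectation over documents sampled from a large knowledge index is what the paper's efficient approximation addresses.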
Lung cancer tends to be detected at an advanced stage, resulting in high patient mortality. Hence, recent research has focused on early disease detection. Lung cancer generally first appears as lesions developing within the bronchial epithelium of the airway walls. Bronchoscopy is the procedure of choice for effective noninvasive bronchial lesion detection. In particular, autofluorescence bronchoscopy (AFB) discriminates the autofluorescence properties of normal and diseased tissue, whereby lesions appear reddish brown in AFB video frames while normal tissue appears green. Because recent studies show AFB's ability for high lesion sensitivity, it has become a potentially pivotal method for early lung cancer detection during the standard bronchoscopic airway exam. Unfortunately, manual inspection of AFB video is extremely tedious and error-prone, while limited effort has been expended toward potentially more robust automatic AFB lesion detection and segmentation. We propose a real-time deep-learning architecture, ESFPNet, for robust detection and segmentation of bronchial lesions from an AFB video stream. The architecture features an encoder-decoder structure that exploits a pretrained Mix Transformer (MiT) encoder and an Efficient Stage-wise Feature Pyramid (ESFP) decoder. Results from AFB videos of lung cancer patient airway exams indicate that our approach gives mean Dice index and IoU values of 0.782 and 0.658, respectively, while processing throughput is 27 frames/sec. These values are superior to results achieved by other competing architectures that use Mix Transformer or CNN-based encoders. Moreover, superior performance on the ETIS-LaribPolypDB dataset demonstrates its potential applicability to other domains.
Modern machine learning models are opaque, and as a result there is a burgeoning academic subfield on methods that explain these models' behavior. However, what is the precise goal of providing such explanations, and how can we demonstrate that explanations achieve this goal? Some research argues that explanations should help teach a student (either human or machine) to simulate the model being explained, and that the quality of explanations can be measured by the simulation accuracy of students on unexplained examples. In this work, leveraging meta-learning techniques, we extend this idea to improve the quality of the explanations themselves, specifically by optimizing explanations such that student models more effectively learn to simulate the original model. We train models on three natural language processing and computer vision tasks, and find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods. Through human annotations and a user study, we further find that these learned explanations more closely align with how humans would explain the required decisions in these tasks. Our code is available at https://github.com/coderpat/learning-scaffold
In attempting to "explain" predictions of machine learning models, researchers have proposed hundreds of techniques for attributing predictions to features that are deemed important. While these attributions are often claimed to hold the potential to improve human "understanding" of the models, surprisingly little work explicitly evaluates progress towards this aspiration. In this paper, we conduct a crowdsourcing study in which participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews. They are challenged both to simulate the model on fresh reviews and to edit reviews with the goal of lowering the probability of the originally predicted class. Successful manipulations would lead to adversarial examples. During the training (but not the test) phase, input spans are highlighted to communicate salience. Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase compared to a no-explanation control. For the BERT-based classifier, popular local explanations do not improve their ability to reduce the model confidence over the no-explanation case. Remarkably, when the explanation for the BERT model is given by the (global) attributions of a linear model trained to imitate the BERT model, people can effectively manipulate the model.
Malaria, a fatal but curable disease, claims hundreds of thousands of lives every year. Early and correct diagnosis is vital to avoid health complications, but it depends on the availability of expensive microscopes and trained experts to analyze blood-smear slides. Deep learning-based methods have the potential not only to decrease the burden on experts but also to improve diagnostic accuracy with low-cost microscopes. However, this is hampered by the lack of a reasonably sized dataset. One of the most challenging aspects is the reluctance of experts to annotate datasets captured at low magnification on low-cost microscopes. We present a dataset to further research on malaria microscopy over low-cost microscopes at low magnification. Our large-scale dataset consists of images of blood-smear slides from several malaria-infected patients, collected through microscopes at two different cost spectrums and multiple magnifications. Malarial cells are annotated for localization and life-stage classification tasks on images collected through the high-cost microscope at high magnification. We design a mechanism to transfer these annotations from the high-cost microscope at high magnification to the low-cost microscope at multiple magnifications. Multiple object detectors and domain adaptation methods are presented as baselines. Additionally, a partially supervised domain adaptation method is introduced to adapt object detectors to work on images collected from the low-cost microscope. The dataset will be made publicly available after publication.
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated. In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training, but are not available at test time. Compared with prior proposals, our approach is less easily gamed, enabling principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate to high degree) across different student model architectures and learning strategies.
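The evaluation protocol described above can be sketched in a few lines: score an attribution method by the gain in teacher-simulation accuracy it confers on a student that sees explanations only at training time. All models below are toy stand-ins, and the function names are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of explanation-as-teaching evaluation. Models are stand-ins.

def simulation_accuracy(student, teacher, examples):
    """Fraction of examples where the student matches the teacher's label."""
    agree = sum(1 for x in examples if student(x) == teacher(x))
    return agree / len(examples)

def explanation_value(train_fn, teacher, train_set, test_set, explanations):
    """Accuracy gain from training with explanations vs. without.
    Explanations are used only at training time, never at test time."""
    student_plain = train_fn(train_set, teacher, explanations=None)
    student_expl = train_fn(train_set, teacher, explanations=explanations)
    return (simulation_accuracy(student_expl, teacher, test_set)
            - simulation_accuracy(student_plain, teacher, test_set))

# Illustration: the teacher labels numbers by sign; the "explained"
# student is told the decisive feature and so generalizes better.
teacher = lambda x: int(x > 0)

def train_fn(train_set, teacher, explanations):
    if explanations is None:
        # Unexplained student memorizes training labels, guesses 0 otherwise.
        memo = {x: teacher(x) for x in train_set}
        return lambda x: memo.get(x, 0)
    return lambda x: int(x > 0)  # explanation reveals the true rule

gain = explanation_value(train_fn, teacher, [1, -2, 3], [5, -4, 7, -8], "sign")
```

Because the score is a difference in held-out simulation accuracy between two students trained identically except for the explanations, it is harder to game than protocols that let explanations leak into test-time inputs.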